
Autonomic Computing – HumanLess Computing

Since the advent of the Internet, access to the network has grown exponentially: users have become increasingly dependent on connectivity, ever more popular online applications have appeared, the amount of information stored on the network has increased enormously, web-oriented programming languages have grown more powerful, social networking portals have been born and developed at an exponential pace, and people increasingly need mobility.
Over the years, all of this has driven the need for ever more powerful hardware infrastructures: the number and size of datacenters worldwide has grown dramatically, and so have network transmission capacity, interconnections, network management technologies, CPU power, and much more.
The same goes for the need for highly qualified personnel to design and maintain hardware infrastructures and to build applications, portals, and databases; in short, the market around the Internet has become an economy in itself.
In the last two to three years, mergers between information technology giants have partly driven Wall Street indices, helping to hold back the fall of the markets.

Virtualization has now spread to all of the Fortune 500, although it still struggles to penetrate SMEs, mainly because of the initial investment required. Its advent and exponential development have accustomed us to new paradigms for using a datacenter’s hardware infrastructure, bringing administrative advantages and automation that is increasingly easy to implement, on top of the well-known economic advantage of a reduced TCO.

In the last two years, the emergence of the cloud computing “trend”, which brings datacenter virtualization to SMEs in the form of pay-per-use computational power, has further changed the paradigm of hardware infrastructure use, making it even simpler, more manageable, more elastic, and cheaper.

These revolutions, still underway, will transform all of today’s datacenters over the next 5 years (per IDC and Gartner data). If on the one hand they will bring enormous innovation benefits to smaller companies, on the other they will leave a void among the administrative staff who used to work on infrastructures, who are no longer as necessary as before. The know-how of system administrators, developers, and solution architects, along with all their certifications, is beginning to lose value, and much of what most universities teach will no longer be of any use. Computer engineers whose professors have not kept pace with the times will find themselves having to retrain before entering the world of work.

Consider realities like Wikipedia, with the following statistics:

  • 50,000 http requests per second
  • 80,000 SQL queries per second
  • 7 million registered users
  • 18 million page objects in the English version
  • 250 million page links
  • 220 million revisions
  • 1.5 terabytes of compressed data

It is supported by:

  • 200 application servers
  • 20 database servers
  • 70 cache servers
  • about 4 system administrators

Or Facebook:

  • Number of MySQL servers – 1,800
  • Number of MySQL DBAs – 2
  • Number of Web servers – 10,000
  • Number of Memcached servers – 805

With 900 employees, of whom an unknown number are system administrators.

In many of these extreme cases, where administering, expanding, and maintaining the fleet would put even the most experienced team in crisis, it has been possible to streamline and automate all the procedures involved in managing the required growth in computing power.

Take the Amazon case, reported in an old post: the enormous growth of its e-commerce business created the need for more manageable systems administration, which led Amazon to build a cloud computing infrastructure internally; that infrastructure then turned out to be an excellent product to sell.

HumanLess Computing

In the short term, the test of a fault-tolerant infrastructure’s efficiency will be to measure the human factor in its management or, possibly, the lack of it. The near future will see computers and infrastructures that no longer need humans for management and development. Hence the title: humanless computing.

Obviously, humans still intervene in designing and implementing the infrastructure that is then capable of managing and growing by itself.

But let’s take a closer look at what these terms mean, starting with a presentation by Lew Tucker, vice president and CTO of Sun’s Cloud Computing Initiative.

Autonomic Computing

Now let’s take a look at IBM Research, where work in the field of autonomic computing has been going on for years. It must be said that theorizing is good, but being overtaken by those who started their research much later, working directly in the field, is sad.

I want to list here some of the solutions that we have evaluated, studied, or whose development we have been following for a long time now.

VMware: vCloud. An extension of the vCenter infrastructure’s virtualization solutions which, with components such as the Orchestrator, makes it possible to program actions in response to events by defining workflows, as mentioned in an old post.

Citrix: Citrix Cloud Center. An all-round hardware and software solution that can be managed with tools such as Workflow Studio, suited to triggering actions on events recorded on the infrastructure.

Novell: PlateSpin Orchestrate (a recent name resulting from the acquisition of PlateSpin). A system capable of automating resource provisioning through workflows designed case by case.

Cisco: Unified Computing. An infrastructure already covered in other old posts (1, 2), capable of managing an entire datacenter easily and elastically.

Amazon: Autoscalability. A set of web services that, once correctly programmed, allow an application to increase and decrease its computational resources based on recorded traffic.
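
To make this concrete, here is a minimal sketch using the modern boto3 SDK (which did not exist when these services launched): a scale-out policy on an Auto Scaling group, fired by a CloudWatch alarm on average CPU. The group, policy, and alarm names are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical Auto Scaling group name; the group itself must already exist.
    GROUP = "web-asg"

    # Scale-out policy: add one instance each time the policy fires,
    # then wait 5 minutes before allowing another adjustment.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-load",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # CloudWatch alarm: trigger the policy when the group's average CPU
    # stays above 70% for two consecutive 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A matching scale-in policy (ScalingAdjustment=-1, tied to a low-CPU alarm) completes the elastic loop in the other direction.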

These commercial examples (there are many other open source solutions) all stem from the same key concept: programming the actions to perform in response to recorded events (orchestration, workflows). For example, an overload of requests on a web server calls for a second web server and a load balancer; an overload on a database server calls for activating a second database server with synchronized data replication; and so on.
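
Reduced to its essentials, the pattern behind all of these products is a rule engine mapping observed events to remediation actions. Here is a toy sketch in Python; the metrics, thresholds, and actions are hypothetical placeholders, not any vendor’s API.

    # Hypothetical remediation actions; a real orchestrator would clone a VM,
    # reconfigure a load balancer, start replication, and so on.
    def add_web_server_and_balancer():
        print("provisioning second web server behind a load balancer")

    def add_db_replica():
        print("activating second DB server with synchronized replication")

    # Workflow rules: each maps an overload condition to an action.
    RULES = [
        (lambda m: m["web_requests_per_sec"] > 5000, add_web_server_and_balancer),
        (lambda m: m["db_queries_per_sec"] > 8000, add_db_replica),
    ]

    def orchestrate(metrics):
        """Fire the action of every rule whose condition matches."""
        for condition, action in RULES:
            if condition(metrics):
                action()

    # One monitoring cycle: the web tier is overloaded, the DB tier is not.
    orchestrate({"web_requests_per_sec": 6200, "db_queries_per_sec": 3100})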

But autonomic computing includes other features that we will see in the near future: artificial intelligence capable of predicting load increases or decreases from the experience the system has recorded, adding or releasing resources before they are needed, because they are expected to be.
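
A minimal sketch of the predictive idea, assuming a naive linear extrapolation from the last two load samples (a real system would learn a much richer model from its recorded experience):

    def forecast_next(history):
        """Naive forecast: extend the trend of the last two samples."""
        return history[-1] + (history[-1] - history[-2])

    def servers_needed(history, capacity_per_server=1000):
        """Provision for the *expected* load, rounding up."""
        expected = forecast_next(history)
        return -(-expected // capacity_per_server)  # ceiling division

    # Requests/sec over the last three intervals: the load is climbing,
    # so capacity is added before the forecast peak actually arrives.
    samples = [2200, 2900, 3600]
    print(servers_needed(samples))  # forecast ~4300 req/s -> 5 servers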

Author

fabio.cecaro
